

Section: New Results

Estimation and control for Markov Processes

Participants: R. Azaïs, F. Bouguet, A. Gégout-Petit, F. Greciet, B. Scherrer

 

Piecewise-deterministic Markov processes (PDMPs) form a class of stochastic models with a wide range of applications. Such processes follow a deterministic motion punctuated by random jumps at random times, and offer simple yet challenging models to study. The statistical estimation of the parameters governing the jump mechanism is far from trivial. Responding to new developments in the field as well as to current research interests and needs, the book “Statistical Inference for Piecewise-deterministic Markov Processes”, edited by Romain Azaïs and Florian Bouguet [33], gathers seven chapters by different authors on the topic. The idea for this book stemmed from a workshop organized in Nancy during the winter of 2016–2017. Two chapters [48], [31] were co-authored by one or more BIGS members.
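To make the "deterministic motion punctuated by random jumps" concrete, here is a minimal simulation sketch of a toy TCP-window-style PDMP, assuming linear deterministic growth, Poisson jump times, and a halving jump kernel; all names and parameters are illustrative choices, not the models studied in the book:

```python
import random

def simulate_pdmp(x0=1.0, t_end=10.0, rate=1.0, growth=1.0, frac=0.5, seed=0):
    """Simulate a toy PDMP: the state grows linearly at speed `growth`
    between jumps; jump times form a Poisson process of intensity `rate`;
    at each jump the state is multiplied by `frac` (a deterministic kernel
    for simplicity). Returns the piecewise-linear path as (time, state)
    pairs, with two entries per jump time (before and after the jump)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        dt = rng.expovariate(rate)            # waiting time until the next jump
        if t + dt > t_end:                    # no more jumps before the horizon
            path.append((t_end, x + growth * (t_end - t)))
            return path
        t += dt
        x += growth * dt                      # deterministic flow between jumps
        path.append((t, x))                   # state just before the jump
        x *= frac                             # jump
        path.append((t, x))                   # state just after the jump
```

Statistical inference for such models typically targets the jump intensity `rate` and the jump kernel from an observed path.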

Multiple-step lookahead policies have shown strong empirical performance in Reinforcement Learning, through the use of Monte Carlo Tree Search or Model Predictive Control. In [13], multiple-step greedy policies and their use in vanilla Policy Iteration algorithms were proposed and analyzed. In [14], [12], we study multiple-step greedy algorithms in more practical setups: we describe and analyze a stochastic-approximation variant and carry out a general sensitivity analysis with respect to approximation errors. In [15], we present a short study of Anderson acceleration applied to the fixed-point computation involved in Reinforcement Learning. These contributions resulted in one publication at ICML, one at NeurIPS, and two at EWRL (the European Workshop on Reinforcement Learning).
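As a sketch of the kind of acceleration studied in this line of work, the following applies textbook (type-II, undamped, unregularized) Anderson acceleration to a generic fixed-point map x ↦ T(x), such as a Bellman policy-evaluation operator; the function name, memory size, and any toy MDP used with it are illustrative assumptions, not the setup of [15]:

```python
import numpy as np

def anderson_fixed_point(T, x0, m=3, iters=50, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- T(x).
    Keeps the last m iterates and combines them by a least-squares mixing
    of the residuals g_i = T(x_i) - x_i, subject to the weights summing
    to one (solved via the standard difference reformulation)."""
    xs, fs = [x0], [T(x0)]
    for _ in range(iters):
        mk = min(m, len(xs))
        G = np.stack([f - x for f, x in zip(fs[-mk:], xs[-mk:])], axis=1)
        if mk == 1:
            a = np.array([1.0])               # first step: plain iteration
        else:
            dG = G[:, 1:] - G[:, :-1]         # residual differences
            # minimize ||g_last - dG @ gamma|| over gamma
            gamma, *_ = np.linalg.lstsq(dG, G[:, -1], rcond=None)
            a = np.zeros(mk)                  # recover weights summing to 1
            a[0] = gamma[0]
            for i in range(1, mk - 1):
                a[i] = gamma[i] - gamma[i - 1]
            a[-1] = 1.0 - gamma[-1]
        x_new = np.stack(fs[-mk:], axis=1) @ a   # mixed iterate
        f_new = T(x_new)
        if np.linalg.norm(f_new - x_new) < tol:
            return x_new
        xs.append(x_new)
        fs.append(f_new)
    return xs[-1]
```

For a linear policy-evaluation operator T(v) = r + γPv, this typically reaches the fixed point in far fewer iterations than the plain contraction iteration; a production implementation would also add regularization or restarts for robustness.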